    Perspectives on scientific error

    Theoretical arguments and empirical investigations indicate that a high proportion of published findings do not replicate and are likely false. This position paper provides a broad perspective on the scientific errors that may lead to replication failures, focusing on reform history and on opportunities for future reform. We organize our perspective along four main themes: institutional reform, methodological reform, statistical reform, and publishing reform. For each theme, we illustrate potential errors by narrating the story of a fictional researcher during the research cycle, and we discuss future opportunities for reform. The resulting agenda provides a resource to usher in an era marked by a less error-prone research culture and a scientific publication landscape with fewer spurious findings.

    Crowdsourcing hypothesis tests: Making transparent how design choices shape research results

    To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from two separate large samples (total N > 15,000) were then randomly assigned to complete one version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: materials from different teams rendered statistically significant effects in opposite directions for four out of five hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for two hypotheses, and a lack of support for three hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, while considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim.
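
    The variance-attribution finding can be made concrete with a toy decomposition. The sketch below is our own illustration in Python, with fabricated numbers and a simple sums-of-squares split rather than the paper's actual analysis: effect-size estimates on a teams-by-hypotheses grid are partitioned into a between-hypothesis share and a between-team share.

        import numpy as np

        rng = np.random.default_rng(0)
        n_teams, n_hyps = 15, 5

        # Fabricated scenario mirroring the abstract's claim: hypotheses
        # differ substantially, team "skill" adds almost nothing.
        hyp_effects = rng.normal(0.0, 0.3, size=n_hyps)
        team_effects = rng.normal(0.0, 0.02, size=n_teams)
        d = (hyp_effects[None, :] + team_effects[:, None]
             + rng.normal(0.0, 0.1, size=(n_teams, n_hyps)))  # sampling error

        grand = d.mean()
        ss_total = ((d - grand) ** 2).sum()
        ss_hyp = n_teams * ((d.mean(axis=0) - grand) ** 2).sum()  # between hypotheses
        ss_team = n_hyps * ((d.mean(axis=1) - grand) ** 2).sum()  # between teams

        print(f"variance share, hypotheses: {ss_hyp / ss_total:.2f}")
        print(f"variance share, teams:      {ss_team / ss_total:.2f}")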

    Bridging Evolutionary Biology and Developmental Psychology: Toward An Enduring Theoretical Infrastructure

    Bjorklund synthesizes promising research directions in developmental psychology using an evolutionary framework. In general terms, we agree with Bjorklund: Evolutionary theory has the potential to serve as a metatheory for developmental psychology. However, as currently used in psychology, evolutionary theory is far from reaching this potential. In evolutionary biology, formal mathematical models are the norm. In developmental psychology, verbal models are the norm. In order to reach its potential, evolutionary developmental psychology needs to embrace formal modeling.
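
    As a concrete illustration of what "formal" means here (our example, not one drawn from the target article), a textbook formal model in evolutionary biology is the replicator dynamics:

        \dot{x}_i = x_i \left( f_i(x) - \bar{f}(x) \right)

    where x_i is the frequency of strategy i, f_i(x) its fitness, and \bar{f}(x) the population-mean fitness. A verbal model rarely commits to assumptions at this level of precision, which is the gap the authors argue evolutionary developmental psychology must close.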

    Honest signaling in academic publishing

    Academic journals provide a key quality-control mechanism in science. Yet, information asymmetries and conflicts of interest incentivize scientists to deceive journals about the quality of their research. How can honesty be ensured, despite incentives for deception? Here, we address this question by applying the theory of honest signaling to the publication process. Our models demonstrate that several mechanisms can ensure honest journal submission, including differential benefits, differential costs, and costs to resubmitting rejected papers. Without submission costs, scientists benefit from submitting all papers to high-ranking journals, unless papers can only be submitted a limited number of times. Counterintuitively, our analysis implies that inefficiencies in academic publishing (e.g., arbitrary formatting requirements, long review times) can serve a function by disincentivizing scientists from submitting low-quality work to high-ranking journals. Our models provide simple, powerful tools for understanding how to promote honest paper submission in academic publishing.
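
    The core intuition of these models can be shown with a toy expected-payoff calculation. The sketch below is our own illustration under assumed parameter values (acceptance probabilities, payoffs, and a per-submission cost), not the paper's actual model: without a submission cost, both paper types gain by trying the high-ranking journal first; with a cost, only high-quality papers do.

        # Toy costly-signaling calculation; all numbers are assumptions.
        def expected_payoff(quality, p_accept, benefit_high, benefit_low, cost):
            """Try the high-ranking journal first; on rejection, fall back
            to a low-ranking journal that accepts everything."""
            p = p_accept[quality]
            return p * benefit_high + (1 - p) * benefit_low - cost

        p_accept = {"high": 0.6, "low": 0.1}  # review filters on quality
        go_straight_low = 4.0                 # baseline: skip the top journal

        for cost in (0.0, 2.0):
            for q in ("high", "low"):
                try_top = expected_payoff(q, p_accept, benefit_high=10.0,
                                          benefit_low=4.0, cost=cost)
                print(f"cost={cost}, quality={q}: try top = {try_top:.1f}, "
                      f"go straight low = {go_straight_low:.1f}")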

    Not All Effects Are Indispensable: Psychological Science Requires Verifiable Lines of Reasoning for Whether an Effect Matters.

    To help move researchers away from heuristically dismissing "small" effects as unimportant, recent articles have revisited arguments to defend why seemingly small effect sizes in psychological science matter. One argument is based on the idea that an observed effect size may increase in impact when generalized to a new context because of processes of accumulation over time or application to large populations. However, the field is now in danger of heuristically accepting all effects as potentially important. We aim to encourage researchers to think thoroughly about the various mechanisms that may both amplify and counteract the importance of an observed effect size. When generalizing an effect to a new, and likely more dynamic, context, researchers should consider the multiple amplifying and counteracting mechanisms likely to apply simultaneously. In this way, researchers should aim to transparently provide verifiable lines of reasoning to justify their claims about an effect's importance or unimportance. This transparency can help move psychological science toward a more rigorous assessment of when psychological findings matter for the contexts that researchers want to generalize to.
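
    The amplify-versus-counteract reasoning the authors call for can be sketched with invented numbers: a tiny per-person effect scaled up by a large population (an amplifying mechanism), then discounted by decay in a more dynamic context (a counteracting mechanism). Every figure below is hypothetical.

        per_person_gain = 0.002        # tiny per-person effect on some outcome
        population = 10_000_000        # amplifier: effect applies at scale
        decay = 0.7                    # counteractor: 70% washes out in context

        naive_impact = per_person_gain * population   # 20,000 outcome units
        adjusted_impact = naive_impact * (1 - decay)  # 6,000 after decay
        print(naive_impact, adjusted_impact)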
